Note: This page's design, presentation and content have been created and enhanced using Claude (Anthropic's AI assistant) to improve visual quality and educational experience.
Week 6 • Sub-Lesson 1

✍️ Writing as Thinking

Why the struggle of writing is not a problem to be solved — it is the process through which researchers develop, test, and refine their ideas

What We'll Cover

This week we turn from how AI changes the way you find and evaluate research (Week 5) to how it changes the way you write and communicate that research. We begin with what might be the most important idea of the entire week: writing is not just output. It is a cognitive process — a form of thinking that cannot be separated from the ideas it produces.

When you struggle to put an argument into words, that struggle is not wasted effort. It is the mechanism by which you discover what you actually think, where your reasoning breaks down, and what you still need to learn. This is not a romantic notion about the craft of writing. It is a claim backed by cognitive science, and it has profound implications for how you use AI writing tools.

In this session, we will examine the cognitive science behind writing-as-thinking, look at emerging research on what happens when AI disrupts this process, map out a spectrum of AI assistance from low-risk to high-risk, and connect all of this back to the virtue ethics framework you developed in Week 4. The goal is not to scare you away from AI writing tools — it is to help you use them in ways that make you a better thinker, not a better delegator.

🧠 Writing IS Thinking

The relationship between writing and thinking is not merely correlational — it is constitutive. You do not first think clearly and then write down what you have thought. The act of writing itself forces you to clarify vague intuitions, test the logical connections between ideas, and confront gaps in your understanding that were invisible while the ideas remained in your head.

⚙️ The Cognitive Science of Writing-as-Thinking

Cognitive scientists have long understood that writing is not a transcription activity but a knowledge-transforming activity. When you write, you are forced to linearise thoughts that exist as messy, interconnected webs in your mind. This linearisation process — choosing what comes first, what follows, what supports what — is itself a form of reasoning. You cannot organise an argument on paper without first organising it in your mind, and the attempt to organise it on paper frequently reveals that your mental organisation was less coherent than you believed.

Writing also imposes a form of self-explanation. When you write for a reader (even an imagined one), you must make explicit the assumptions, definitions, and logical steps that you take for granted in internal thought. This externalisation is where many of the deepest insights occur. A researcher who says "I knew this, but I only really understood it when I had to write it down" is describing a genuine cognitive phenomenon, not a figure of speech.

The implication is significant: if you skip the writing, you skip the thinking. A perfectly polished paragraph produced by AI may communicate an idea effectively to a reader, but it has not put you through the cognitive process that would have deepened your own understanding of that idea. The product looks the same; the intellectual development is entirely absent.

Recent research has begun to quantify what happens when AI disrupts this cognitive process. The findings are sobering.

🧠 Reduced Cognitive Engagement

A 2025 article in the Harvard Gazette reported on research exploring whether AI use is associated with reduced cognitive effort. The concern is that when AI handles the heavy lifting of formulating ideas into prose, users engage in less deep processing — the kind of effortful thinking that leads to genuine understanding and long-term retention. The question the researchers raise is direct: is AI making us intellectually lazier?

📈 Brain Activity and ChatGPT

Researchers at the MIT Media Lab conducted a study ("Your Brain on ChatGPT") measuring neural activity in participants while they used ChatGPT for writing tasks. The study found that ChatGPT users showed the lowest levels of brain engagement compared to control groups, with reduced alpha and beta connectivity indicating under-engagement. The sample size was relatively small (54 participants) and the findings should be interpreted with appropriate caution. But they align with the broader theoretical concern: offloading the cognitive work of writing to AI may reduce the depth of thinking that writers engage in.

💡 The key insight: The struggle of writing is not a bug — it is the feature. When you stare at a blank page and wrestle with how to express a complex idea, that discomfort is the feeling of your mind doing its most demanding work. The difficulty IS the learning. If writing feels easy, you are probably not pushing yourself intellectually. And if AI makes writing feel effortless, you should ask what cognitive work you are no longer doing.

👫 The Cognitive Dissonance

Beyond the question of reduced thinking, researchers are documenting a specific psychological tension that emerges when students and academics use AI for writing — a form of cognitive dissonance that goes to the heart of scholarly identity.

📰 The Dissonance Between "Sounds Better" and "Sounds Like Me"

A 2025 study published in Frontiers in AI conducted a conceptual exploration of what the authors call "generative AI-induced cognitive dissonance" in university-level academic writing. The researchers identified a specific tension: students recognise that AI-generated text often sounds more polished and professional than their own writing, yet they also feel that this text does not represent their thinking, their voice, or their intellectual development.

This is not a trivial discomfort. For researchers, your writing voice is inseparable from your intellectual identity. The way you frame a problem, the analogies you reach for, the hedging language you use — these reflect how you think, not just what you think. When AI produces text that is technically superior but intellectually hollow (from your perspective), you are faced with an uncomfortable choice: submit something that sounds better but is not yours, or submit something that sounds worse but represents genuine intellectual engagement.

The study argues that this dissonance, if unresolved, can lead researchers to gradually cede more writing to AI — not because they believe it produces better scholarship, but because the polished output creates social pressure to match that standard, even at the cost of genuine engagement with ideas.

The First Draft Trap

One of the most common uses of AI in academic writing is generating a first draft. On the surface, this seems efficient: let AI produce a rough version, then edit it into shape. But there is a fundamental problem with this approach that goes beyond questions of originality.

The Dependency Risk

A 2025 synthesis published in Frontiers in Education reviewed recent evidence (2023–2025) on the impact of generative AI on academic reading and writing. Among the patterns identified was a concerning trend toward student dependency on AI outputs, coupled with surface-level engagement with the material. Students who regularly use AI for writing tasks showed reduced willingness to engage in the kind of deep, sustained, often uncomfortable cognitive work that characterises genuine academic inquiry.

⚠️ The dependency spiral: The more you use AI for writing, the less practice you get at the cognitive work of writing. The less practice you get, the harder writing feels. The harder writing feels, the more tempting it is to use AI. This is not a slippery slope argument — it is a well-established pattern in skill development. Skills that are not practised atrophy. The question is not whether you can use AI to write. The question is what cognitive capacities you lose when you do.

📐 A Spectrum of AI Assistance

Not all uses of AI in writing are equal. There is a meaningful difference between using AI to fix your typos and using AI to generate your arguments. The following spectrum maps out different levels of AI assistance, from genuinely helpful to genuinely harmful to your development as a researcher.

✅ Proofreading and Grammar

Low Risk

Fixing typos, correcting grammar, adjusting punctuation. You have done all the thinking — the argument is yours, the structure is yours, the voice is yours. AI polishes the surface without touching the substance. This is the equivalent of a spell-checker, and there is no meaningful cognitive cost. Your ideas remain entirely your own; AI simply ensures they are presented without distracting mechanical errors.

📝 Language Enhancement

Low–Medium Risk

Improving clarity, smoothing flow, refining word choice. The ideas and structure remain yours, but AI helps you communicate them more effectively. This is particularly valuable for non-native English speakers (which we will explore in Sub-Lesson 3). The risk is modest: you might lose some of your distinctive voice, and you might not develop your own instincts for clear prose. But the core intellectual work — the thinking — is still yours.

🔨 Restructuring

Medium Risk

AI suggests a better organisation for your arguments. The content is still yours, but AI is now shaping how your arguments are presented — what comes first, how sections relate, where emphasis falls. Structure is not neutral: the order in which you present ideas affects how a reader (and you) understand the relationships between them. When AI restructures your writing, it is making argumentative choices on your behalf, even if you approve them afterward.

💥 Generating Arguments

High Risk

AI produces the reasoning — the claims, the evidence selection, the logical connections. This is where the line between assistance and substitution becomes dangerously blurry. If AI generates an argument, whose argument is it? Can you defend it under questioning? Do you understand why this particular evidence supports this particular claim, or are you trusting that the AI got it right? If you cannot articulate the reasoning without looking at what the AI wrote, the argument is not yours.

🚫 Generating Entire Sections

Very High Risk

AI writes substantial portions of your work — paragraphs, sections, or entire chapters. This is not assistance. This is substitution. The AI has done the thinking, made the structural choices, selected the evidence, and crafted the prose. Your role has been reduced from author to editor-of-someone-else's-work, and the "someone else" is a statistical model that does not understand the content it has produced. You are submitting work that you did not intellectually engage with in any meaningful way.

💡 Where is the line? There is no universal answer to where acceptable assistance ends and problematic substitution begins. Different institutions, journals, and supervisors draw the line differently. What matters is that you are honest with yourself about which side of the line you are on. Ask yourself: if my supervisor asked me to explain every choice in this paragraph — why this structure, why this evidence, why this phrasing — could I do it from my own understanding? If the answer is no, AI has done too much.

👍 When AI Writing Assistance Genuinely Helps

The point of this session is not to argue that AI should never be used for writing. There are genuine cases where AI writing assistance can help researchers produce better work without sacrificing the cognitive benefits of the writing process. The key is to use AI in ways that support your thinking rather than replace it.

💬 Overcoming Writer's Block

Sometimes the hardest part of writing is starting. When you are staring at a blank page, paralysed by the gap between the complexity in your head and the empty document in front of you, AI can serve as a conversation partner. Not to write for you, but to help you think out loud. Explain your ideas to Claude as if to a colleague. Ask it to push back, to ask clarifying questions, to identify where your reasoning is unclear. Use AI as a thinking tool — a way to externalise your thoughts — and then write the actual text yourself.

🌐 Language Polishing for Non-Native Speakers

If English is not your first language (we will explore this in depth in Sub-Lesson 3), AI can help you express ideas that are already fully formed in your mind but difficult to articulate in academic English. This is one of the strongest legitimate use cases: you have done the thinking, you know what you want to say, and AI helps you say it in a language that is not your own. The intellectual work remains entirely yours; the linguistic barrier is lowered.

📝 Summarising Your Own Work

You have written a 10,000-word thesis chapter and need a 200-word abstract. You have produced a technical paper and need a plain-language summary for a public audience. These are translation tasks — expressing the same ideas at different levels of detail or for different audiences. AI can be genuinely useful here because the ideas are already yours; you are not generating new thinking, you are reformatting existing thinking.

🔍 Getting Feedback on Clarity

After you have written a draft yourself, asking AI to identify passages that are unclear, arguments that seem unsupported, or transitions that feel abrupt can be valuable feedback. This is analogous to asking a colleague to read your draft — the thinking and writing are yours, and AI provides a reader's perspective on how effectively you have communicated. The critical difference: you wrote the draft first. The thinking happened. AI responds to your work, rather than generating it.

⚠️ Even in these legitimate cases: You must remain the thinker, not just the editor. Every use case above assumes that you have done the intellectual work first and that AI is supporting the communication of ideas you already understand. The moment AI starts making decisions about what your argument should be, which evidence to use, or how to structure your reasoning, you have crossed from assistance to substitution — regardless of how you frame it to yourself.

📚 Connection to Virtue Ethics

In Week 4, we explored multiple ethical frameworks for thinking about AI in research. One of the most relevant to this week's topic is virtue ethics — the framework that asks not "what should I do?" but "what kind of person am I becoming?"

🧠 What kind of researcher are you becoming? Virtue ethics invites us to consider not just the immediate outcome (a polished essay, a well-structured thesis chapter) but the long-term effect on our character and capabilities. If AI does your thinking-through-writing, you are not developing the intellectual muscles that define expertise. You may produce better output in the short term, but you are becoming a weaker thinker in the long term. Expertise is not just knowing things — it is the capacity to reason through complexity, to synthesise disparate ideas, to construct arguments that hold up under scrutiny. These capacities are developed through practice, and writing is one of the primary forms of that practice.

💡 A practical test from virtue ethics: Before submitting any piece of writing, ask yourself: "If my supervisor sat me down and asked me to explain every claim, every structural choice, and every piece of evidence in this work — from memory, without looking at the text — could I do it?" If the answer is yes, AI has been a tool in service of your thinking. If the answer is no, AI has replaced your thinking. The virtue ethics framework says: the first makes you a better researcher; the second makes you a weaker one.

📚 Readings

Core Readings

📰 Harvard Gazette (2025): "Is AI dulling our minds?"

Explores research on the cognitive effects of AI use, including concerns about reduced deep thinking and intellectual engagement when AI handles cognitively demanding tasks. A broad, accessible introduction to the central concern of this session.


📰 Frontiers in AI (2025): "A conceptual exploration of generative AI-induced cognitive dissonance and its emergence in university-level academic writing"

A peer-reviewed study examining the psychological tension students experience when AI-generated text sounds more polished than their own writing but does not represent their thinking. Introduces the concept of cognitive dissonance in the AI writing context and explores its implications for academic identity and intellectual development.


📰 Frontiers in Education (2025): "The impact of generative AI on academic reading and writing: a synthesis of recent evidence (2023–2025)"

A synthesis of recent research evidence on how generative AI is changing academic reading and writing practices. Identifies patterns including student dependency on AI outputs, surface-level engagement, and the tension between productivity gains and learning losses. Essential reading for understanding the landscape of current evidence.


Supplementary Readings

📖 Psychology Today (2025): "How AI Impacts Academic Thinking, Writing and Learning"

An accessible overview of the intersection between AI tools and cognitive processes in academic settings. Discusses how reliance on AI for writing tasks may affect the development of critical thinking and independent reasoning skills.


📖 Holmner et al. (2025): "The Future of Academic Writing in the Age of Generative AI"

Published in the Proceedings of the Association for Information Science and Technology (ASIS&T), this paper examines how generative AI is reshaping the landscape of academic writing and what this means for the future of scholarly communication.


Key Takeaways

  • Writing is a cognitive process, not just a communication tool. The act of writing forces you to clarify, structure, and test your ideas in ways that thinking alone does not. When you skip the writing, you skip a critical form of intellectual development.
  • AI creates a real cognitive dissonance for academic writers. Students and researchers feel torn between text that sounds polished (AI) and text that represents their actual thinking (their own). This tension, if unresolved, can push writers toward dependency rather than development.
  • The "first draft trap" is real. If AI writes your first draft, you edit rather than think. Editing someone else's prose — even AI prose — is fundamentally different from generating your own. The hardest and most valuable cognitive work happens at the beginning, not during the polish.
  • AI writing assistance exists on a spectrum. Proofreading is low risk. Language enhancement is modest risk. Restructuring starts to encroach on your thinking. Generating arguments is high risk. Generating entire sections is substitution, not assistance.
  • Even legitimate uses require you to remain the thinker. Overcoming writer's block, polishing non-native language, summarising your own work, and getting feedback on clarity are all valid uses — but only if you have done the intellectual work first.
  • Virtue ethics asks: what kind of researcher are you becoming? The cumulative effect of offloading cognitive work to AI shapes not just your output but your intellectual capacities. What looks like efficiency today may in fact be avoidance of the practice that builds expertise. The researcher you become in three years depends on the cognitive work you do (or do not do) today.

👉 Up next: Sub-Lesson 2 — Research Ideation with AI. We move from writing to the earlier stages of the research process: using AI to brainstorm research questions, explore conceptual connections, and develop your ideas. How can AI genuinely help you think — without thinking for you?